
    Unification-based Reconstruction of Multi-hop Explanations for Science Questions

    This paper presents a novel framework for reconstructing multi-hop explanations in science Question Answering (QA). While existing approaches for multi-hop reasoning build explanations considering each question in isolation, we propose a method to leverage explanatory patterns emerging in a corpus of scientific explanations. Specifically, the framework ranks a set of atomic facts by integrating lexical relevance with the notion of unification power, estimated by analysing explanations for similar questions in the corpus. An extensive evaluation is performed on the Worldtree corpus, integrating k-NN clustering and Information Retrieval (IR) techniques. We present the following conclusions: (1) the proposed method achieves results competitive with Transformers while being orders of magnitude faster, a feature that makes it scalable to large explanatory corpora; (2) the unification-based mechanism plays a key role in reducing semantic drift, contributing to the reconstruction of many-hop explanations (6 or more facts) and the ranking of complex inference facts (+12.0 Mean Average Precision); (3) crucially, the constructed explanations can support downstream QA models, improving the accuracy of BERT by up to 10% overall. Comment: Accepted at EACL 2021.
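    A minimal sketch of the ranking idea described above, assuming toy token-overlap scores: a candidate fact is scored by combining lexical relevance with a unification-power estimate derived from the explanations of the k most similar training questions. All function names and data below are hypothetical illustrations, not the authors' implementation.

```python
from collections import Counter


def tokens(text):
    return set(text.lower().split())


def lexical_relevance(question, fact):
    """Toy lexical score: fraction of fact tokens that also occur in the question."""
    q, f = tokens(question), tokens(fact)
    return len(q & f) / max(len(f), 1)


def unification_power(question, train_questions, train_explanations, k=2):
    """Estimate how often each fact is reused in explanations of the k most
    similar training questions (similarity = token overlap)."""
    nearest = sorted(
        range(len(train_questions)),
        key=lambda i: -len(tokens(question) & tokens(train_questions[i])),
    )[:k]
    counts = Counter(fact for i in nearest for fact in train_explanations[i])
    total = sum(counts.values()) or 1
    return {fact: c / total for fact, c in counts.items()}


def rank_facts(question, facts, train_questions, train_explanations, lam=0.5):
    """Combine lexical relevance and unification power into a single ranking."""
    unif = unification_power(question, train_questions, train_explanations)
    scored = [(lam * lexical_relevance(question, f) + (1 - lam) * unif.get(f, 0.0), f)
              for f in facts]
    return sorted(scored, reverse=True)


if __name__ == "__main__":
    train_q = ["why do plants need sunlight", "how do plants make food"]
    train_e = [["plants use sunlight to make food"],
               ["plants use sunlight to make food", "photosynthesis produces sugar"]]
    candidates = ["plants use sunlight to make food",
                  "rocks are hard",
                  "photosynthesis produces sugar"]
    for score, fact in rank_facts("what do plants need to produce sugar",
                                  candidates, train_q, train_e):
        print(f"{score:.2f}  {fact}")
```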

    Case-based Abductive Natural Language Inference

    Existing accounts of explanation emphasise the role of prior experience in the solution of new problems. However, most contemporary models for multi-hop textual inference construct explanations considering each test case in isolation. This paradigm is known to suffer from semantic drift, which causes the construction of spurious explanations leading to wrong conclusions. In contrast, we investigate an abductive framework for explainable multi-hop inference that adopts the retrieve-reuse-revise paradigm largely studied in case-based reasoning. Specifically, we present a novel framework that addresses and explains unseen inference problems by retrieving and adapting prior natural language explanations from similar training examples. We empirically evaluate the case-based abductive framework on downstream commonsense and scientific reasoning tasks. Our experiments demonstrate that the proposed framework can be effectively integrated with sparse and dense pre-trained encoding mechanisms or downstream transformers, achieving strong performance when compared to existing explainable approaches. Moreover, we study the impact of the retrieve-reuse-revise paradigm on explainability and semantic drift, showing that it boosts the quality of the constructed explanations, resulting in improved downstream inference performance.
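    A rough sketch of the retrieve-reuse-revise loop described above, assuming a toy Jaccard-overlap similarity: retrieve the most similar solved case, reuse its explanation, and revise the facts that do not fit the new problem. The names, thresholds, and data are hypothetical, not the paper's implementation.

```python
def similarity(a, b):
    """Toy Jaccard overlap between the token sets of two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)


def retrieve(problem, case_base):
    """Return the training case (question, explanation) most similar to the new problem."""
    return max(case_base, key=lambda case: similarity(problem, case[0]))


def reuse_and_revise(problem, explanation, fact_bank, threshold=0.1):
    """Keep retrieved facts that overlap with the new problem; replace the rest
    with better-matching facts from the fact bank."""
    revised = []
    for fact in explanation:
        if similarity(problem, fact) >= threshold:
            revised.append(fact)
        else:
            revised.append(max(fact_bank, key=lambda f: similarity(problem, f)))
    return revised


if __name__ == "__main__":
    case_base = [
        ("why is the sky blue",
         ["sunlight scatters in the atmosphere", "blue light scatters most"]),
    ]
    fact_bank = ["sunsets look red because blue light is scattered away",
                 "blue light scatters most",
                 "water boils at 100 degrees"]
    question = "why do sunsets look red"
    _, explanation = retrieve(question, case_base)
    print(reuse_and_revise(question, explanation, fact_bank))
```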

    Identifying Supporting Facts for Multi-hop Question Answering with Document Graph Networks

    Recent advances in reading comprehension have resulted in models that surpass human performance when the answer is contained in a single, continuous passage of text. However, complex Question Answering (QA) typically requires multi-hop reasoning, i.e. the integration of supporting facts from different sources to infer the correct answer. This paper proposes the Document Graph Network (DGN), a message-passing architecture for the identification of supporting facts over a graph-structured representation of text. The evaluation on HotpotQA shows that DGN obtains competitive results when compared to a reading comprehension baseline operating on raw text, confirming the relevance of structured representations for supporting multi-hop reasoning.
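    The following is a schematic sketch of the message-passing idea, not the DGN architecture itself: sentence nodes exchange features along graph edges, and each node is then scored against the question representation to identify supporting facts. The feature vectors, adjacency matrix, and scoring rule below are toy assumptions.

```python
import numpy as np


def message_passing(node_feats, adjacency, rounds=2):
    """Average-neighbour message passing: each node mixes its own feature
    vector with the mean of its neighbours' vectors."""
    h = node_feats.copy()
    for _ in range(rounds):
        deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
        neigh = adjacency @ h / deg
        h = 0.5 * h + 0.5 * neigh
    return h


def score_supporting_facts(h, question_vec):
    """Score each sentence node by cosine similarity with the question vector."""
    h_norm = h / np.linalg.norm(h, axis=1, keepdims=True).clip(min=1e-9)
    q_norm = question_vec / max(np.linalg.norm(question_vec), 1e-9)
    return h_norm @ q_norm


if __name__ == "__main__":
    # Toy graph: 3 sentence nodes; sentences 0 and 1 are linked (e.g. shared entity).
    feats = np.array([[1.0, 0.0], [0.8, 0.2], [0.0, 1.0]])
    adj = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
    question = np.array([1.0, 0.1])
    print(score_supporting_facts(message_passing(feats, adj), question))
```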

    Hybrid Autoregressive Inference for Scalable Multi-Hop Explanation Regeneration

    Regenerating natural language explanations in the scientific domain has been proposed as a benchmark to evaluate complex multi-hop and explainable inference. In this context, large language models can achieve state-of-the-art performance when employed as cross-encoder architectures and fine-tuned on human-annotated explanations. However, while much attention has been devoted to the quality of the explanations, the problem of performing inference efficiently is largely understudied. Cross-encoders, in fact, are intrinsically not scalable, possessing limited applicability to real-world scenarios that require inference on massive fact banks. To enable complex multi-hop reasoning at scale, this paper focuses on bi-encoder architectures, investigating the problem of scientific explanation regeneration at the intersection of dense and sparse models. Specifically, we present SCAR (for Scalable Autoregressive Inference), a hybrid framework that iteratively combines a Transformer-based bi-encoder with a sparse model of explanatory power, designed to leverage explicit inference patterns in the explanations. Our experiments demonstrate that the hybrid framework significantly outperforms previous sparse models, achieving performance comparable with that of state-of-the-art cross-encoders while being approximately 50 times faster and scalable to corpora of millions of facts. Further analyses on semantic drift and multi-hop question answering reveal that the proposed hybridisation boosts the quality of the most challenging explanations, contributing to improved performance on downstream inference tasks.
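    A simplified sketch of the iterative dense-plus-sparse selection strategy outlined above: at each step the remaining facts are re-scored by mixing a toy dense relevance score with a toy explanatory-power score conditioned on the facts selected so far. The embeddings, co-occurrence statistics, and weighting are illustrative assumptions, not SCAR's actual bi-encoder or sparse model.

```python
import numpy as np


def dense_score(query_vec, fact_vecs):
    """Toy stand-in for a bi-encoder: dot product between query and fact embeddings."""
    return fact_vecs @ query_vec


def explanatory_power(selected, fact_usage):
    """Toy sparse score: favour facts that co-occur, in training explanations,
    with the facts already selected."""
    return np.array([len(co & selected) for co in fact_usage], dtype=float)


def regenerate_explanation(query_vec, fact_vecs, fact_usage, steps=2, alpha=0.7):
    """Select facts one at a time, re-scoring the candidates after each selection."""
    selected = set()
    for _ in range(steps):
        combined = (alpha * dense_score(query_vec, fact_vecs)
                    + (1 - alpha) * explanatory_power(selected, fact_usage))
        for idx in selected:          # do not pick the same fact twice
            combined[idx] = -np.inf
        selected.add(int(np.argmax(combined)))
    return sorted(selected)


if __name__ == "__main__":
    query = np.array([1.0, 0.0])
    facts = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3]])
    # usage[i]: indices of facts co-occurring with fact i in training explanations.
    usage = [{2}, set(), {0}]
    print(regenerate_explanation(query, facts, usage))
```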